TurboType is a Chrome extension that helps you type faster with keyboard shortcuts. Try it today and start saving time. They have a free forever plan!
www.turbotype.app/
Thanks man, amazing video. I will try this hotkey manager.
Lowkey one of the best AI youtubers
Free and open source? No such thing. If it were, installing wouldn't be needed: double-click and it would work.
@@minhuang8848 Harvesting info is the goal today. For that, Microsoft made Copilot AI and Recall: spyware within the operating system.
Will it work on iOS devices?
I normally don't mess with this stuff until there's a proper interface; super excited for this stuff to be working open source.
You're right. For reasons I can't fathom, very few AI tools get pushed to the point of having a user-friendly client like Midjourney. Even a programmer like me struggles to pull a repository from git and build it myself; I don't think any regular artist out there can do AI without at least a client.
@@SVAFnemesis what, you don't like typing in Discord? heretic.
@@English_Lessons_Pre-Int_Interm can you please carefully read and understand my comment.
@@SVAFnemesis I think they were being sarcastic.
The first thing I think of is a potential VR application. If this ran in real time, your whole expression would be projected onto your VR avatar.
It looks like it still takes some processing time to run the software, so at least currently you couldn't animate an avatar in real time from a local recording. So for V-tubers, it's at least a few months off. But if it's run by a cloud service, maybe they could make it work.
It's not new; Meta has already demoed this type of realtime avatar. Zuck showed it off on Lex Fridman's podcast.
It's realtime and very high quality; it just requires some pre-processing of a full face scan.
@@iankrasnow5383 With the speed of tech innovation in the last few years, if they decide to work toward this, it won't be that long.
video calling
@@iankrasnow5383 A lot of time. Still not a consumer-friendly implementation. Might take some more time to be realtime-ready. Fingers crossed.
How soon can I get a Hogwarts painting of my dead grandmother to tell me to wash my hands every hour?
Now, if you want.
Now, if you have the time and technical expertise, or enough money to pay someone else to do it.
Very quickly... just get Pinokio and install Live Portrait; it's only 3 clicks. Also, Fooocus and Stable Cascade are great for Midjourney-quality AI stuff.
You can fast track this with Runway.
Wow, the option to animate a face on a source video has great potential. I can already see people creating scenes of people interacting with each other in Runway Gen-3 or another video generator, then editing the video so that the people in the scene actually talk!
We're one step closer to creating movie scenes.
exactly!
@@user-cz9bl6jp8b I don't know. I never tried it. I was only commenting on what I saw in the video.
@@user-cz9bl6jp8b I'd like to know this too.
I'm always too lazy to comment or like, and subscribing is almost impossible, but I did it all today on your video; it's just excellent. Thinking from a programmer's point of view, your video taught me a lot today. Thanks mate, from India.
What happens if the source sticks out a tongue?
The entire internet crashes.
There are already artifacts with the teeth, where they stay static the way the hair does against the background; I'd imagine the same will happen with the tongue.
Huuuge step up for open source nonetheless!
Harambe is resurrected
Doesn't work.
What is bro planning to do
Amazing! Thanks for sharing Live Portrait! Also thanks for the TurboType tool too! Amazing and practical!
YOU ARE REALLY GIVING ME ALL THESE THINGS right when I NEEDED THEM to make my animation!!!!
Nice! Can I see your animation when you're finished?
Good luck!
This is probably the best instructional video I have seen lately, going through the steps in great detail. Thank you for that!! Once installed, it runs smoothly.
You're very welcome!
Were you able to run it on a group photo to animate multiple faces? I was unable to.
@@abhishekpatwal8576 You can uncheck 'do crop' and it does try, but the result isn't there yet.
This is insanely fantastic for controlling expressions. Keep making your great videos. 💯 Top notch. 😁
thanks!
Always appreciate that you show every single step of the install. It's very helpful for people who aren't familiar with code.
my pleasure!
Create a source video from the movie "The Mask", when Jim's character goes freaky with his eyes and mouth.
This is FANTASTIC! It's going to take a minute to "git" everything (the dependencies) installed and working properly (macOS Monterey), but this is open source, so I have nothing to complain about. I'll do whatever it takes and however long it takes to nail this one. Thanks for the tutorial!!
Absolutely incredible!
yes!
I need to recant and say that everything worked out in the end. It is necessary to install all the components first, and only at the end of it all can you install the platform. Thanks for the video.
glad you got it to work!
Which Python version did you use?
@@UserGram-1 What I did was follow the tutorial in this video and after four or five failed attempts it ended up working.
Working on a cartoon about a mischievous young girl named Yumi. I've been using AI since the beginning and finally found a way to make her consistently with apps that effectively use character reference techniques; I've even trained a model on my character. As a creative partner with Pika, Leonardo AI, and finally Runway ML, I'm able to create a ton of content, but I still need to add the character animation. While I actually turned my AI character into a fully rigged Metahuman, it's nice to know that if I need a quick shot and don't have time to set it up in Unreal, I can quickly generate my character in the scene; then I or my niece, who will do most of the performance stuff for Yumi, can act her out and voice her, and I can use that footage and audio to animate the clip. This is an amazing time to be in. As someone who uses AI as a tool, I can see several use cases for stuff like this, and it's going to make life easier for me, since I'm a studio of one with zero budget for most of my stuff. Using a combination of free tools and my natural resourcefulness, I'm starting to make headway. The one-man film studio era is here now.
I am right alongside you brother!
Can you give a short list of how you'd go about character consistency with free tools? I found it either extremely lacking (wasn't consistent) or behind paid models I couldn't try.
For example:
1. AI x: it's free and has no tokens; I use it to do this and that. Then step 2 is...
2. AI x2: also free and has no tokens; now you can...
Part of the webui is a file (frpc_windows_amd64_v0.2) which is a reverse proxy utility. Looks extremely untrustworthy to me. Running under a virtual environment mitigates some of the risk but I'm still skeptical. You should really be running this in an extremely sandboxed operating system.
China + open source + free = I'm gonna steal your life now.
@@kirtisozgur yeah, pretty much
Thanks for sharing!
I also noticed when you add a video it says "uploading video" which seemed a little sus for a local install.
Never trust a Chinese company that wants your data
I got a problem when entering the line "conda activate LivePortrait". It returns "CondaError: Run 'conda init' before 'conda activate'". What shall I do?
Type this: conda init
It will ask you to close the cmd.
Then restart and follow the same instructions.
After that, type: conda activate LivePortrait
@@Bandaniji24 Thank you! It worked. 🤝
Thank you!
I hope future versions of LivePortrait can do the entire body - or at least the upper part, including arms and hands. That'd be such a breakthrough in motion capturing technology!
I could fix the lip sync of the Teenage Mutant Ninja Turtles movie with this!
I can think of a hundred ways to use this creatively, but I've only got 4GB of VRAM.
@@eccentricballad9039 Use Google colab👀
You mean the original 1990 movie? Never noticed
I want to replace young Jeff Bridges and CLU in Tron: Legacy!
heroic af!
That's crazy! I've wanted to create a YT channel for so long but didn't want to use my own voice or face. I can do it now :)
That's a great idea. Good luck to you and your channel! Did you need a Chinese phone number to run this app?
@monday304 LivePortrait doesn't require any number
At 3:58 you start talking about how you can use Live Portrait not just on stationary images but on moving videos too. Then none of the examples at the end showed how to use it on videos. I have tried uploading videos, but they are not a supported format.
What is going on?
I subscribed to you when you had 2K subscribers, and today you have 150K. Bro, you totally deserve it, and your content is worth much more than 150K. Soon you will reach 1M. Love you bro, from India ❤
It looks weird in motion, but if you pause the expressions at any point during the animation, it looks good and natural.
But it looked completely unnatural to only move the head. The shoulders stayed perfectly still throughout.
The video of a "girl rotating" at 4:10 is a Vermeer painting that someone has used AI to animate.
I have a basic laptop without an NVIDIA graphics card; can I use this as well?
Maybe. The bulk of the computational work is on the server side.
Makes me cry that my oldest friend, my computer, can't run good stuff like this.
love your videos, keep up the good work :D
Thanks!
I've been using this since the beta, starting about 3 years ago. Their current version is proprietary, does so much more, and mimics real life in every way.
For the people who can create these types of applications in code: why don't they create an exe-style installer?
Because exe is a Windows thingie; the rest of the world uses a real Unix operating system like Mac and Linux. ;)
Exactly this. I guess the people who can make this kind of stuff are just used to doing things the hard way.
A zipped .exe on cloud storage would have made this a no-brainer.
Laziness.
There is this other program called ChatGPT. I never used code or did any programming before, but I installed Linux on an old computer, since it takes up 90% fewer resources to run than Windows, and I just tell ChatGPT what I want to do... copy command... paste command... and I've built custom personal apps somehow without knowing what I'm doing. If anything goes wrong, I just ask ChatGPT, or I copy and paste the terminal output and tell ChatGPT to translate it into English. If you have not yet experienced life apart from Windows, you will find that if you just jump out of that window and take a walk with the penguin into the rabbit hole, there is a whole other world down there, vast and beautiful, that is so free. If you take the plunge you will find the true meaning of freedom in the PC world, and eventually come to the conclusion that you never knew you'd been locked up for so long behind that window that was preventing you from seeing what else is out there, and all you had to do was open it and not be afraid to jump out. lol
There sure is money to be made by creating 'easy' installers for (would-be) popular applications like this that involve a lot of dependencies. But it's a lot of work and a totally different expertise from giving photos an animated face.
Very thorough tutorial and a very good project to cover!
Thanks!
I love that you walk through the installation. Thank you!
you're welcome!
3:16 She is just absolutely incredible. This easily exceeds cartoon animation.
*edit* Alright, found her: rayray facedancing facial expressions
This is gonna be a nightmare soon enough ...
Yea, one of my favourite (and somewhat fringe) concepts is that "everything comes true in the end", because the context around it changes.
My favourite example of this is primitive tribes in 1970s National Geographic Magazine being afraid of cameras because they thought they could steal your soul.
Well.
Here we are. We are within days of there being a browser extension that, with a single click, can superimpose any photo into any porn video... take any YouTube video of anyone and turn it into a kind of voodoo doll or golem that can be made to perform any action imaginable, including ringing up your family, friends, and enemies and doing such a perfect impersonation of you that it is actually more realistic than you are yourself.
In a way, the algos already have voodoo dolls of you... a lifetime of clicks and comments, etc., rows in a database tied to a single user_id.
I think "text" was a massive, massive revolution in what it meant to be human, because it collapsed the time dimension: memories became something that took zero energy to maintain. I think AI is a process towards collapsing some other dimension, although I've yet to figure out what it is, and I might of course be talking bollocks.
@@nicktaylor5264 i want what youre having
@@nicktaylor5264 Beautifully written. I too need my overlord, my one true leader: a god not to worship but to follow; the basilisk, one of our own making.
@@nicktaylor5264☠️☠️☠️
Sure, if you have no idea about tech. This AI is amazing and it's only gonna get better.
This is going to be so great for video editing. Instead of having to animate facial expressions for characters, we could just use this software.
yes!
What do you recommend for animation software?
People with autism are unable to read emotions from facial expressions like normal people. This technology can help them a lot: you can exaggerate or magnify facial expressions so they can understand you and communicate effectively.
For example, a kid can understand whether his mom is mad or not.
very interesting use case. thanks for sharing!
I don't think it would be feasible to get it done in real time
"like normal people".
It's been a while since a comment made me feel so "abnormal" 😐
It may be simpler to use AI to tell them what emotions someone is showing.
@@KryzysX We already have real time face trackers, real time deepfakes, and we are definitely not that far off from getting real time generative AI of this quality.
Really impressed and highly amused by the facial expression acting.
How about using videos instead of images as the source file? Just like the samples you show; please show us how we can do that as well, thanks. Anyway, I have successfully installed this on my computer using a Python-only env. And you're right, it generates so fast, unlike other video generators like Hallo, which I have installed as well.
glad you got it to work. they will release the video feature soon github.com/KwaiVGI/LivePortrait/issues/27
Thanks to your very detailed, patient, step-by-step instructions, I was able to generate my own live portraits.
My results are not as perfect as the examples, but amazing nonetheless. Thank you! Thank you! Thank you!
Yep, this install process is an effn nightmare, and I have Windows 10, Python, and Linux Mint installed under Hyper-V.
One thing is for sure: AI projects have the absolute least intelligent user interfaces and installation methods. It's almost laughable how universally bad and fragmented they all are, and how none of them talk to each other.
Who would release software with this many environment dependencies? Once he got to editing a path to tell Windows where conda was... I was like, I've been down this road; editing the env path variable never works for me. This is a fail.
I'm right there with ya. Someone needs to have a serious talk with these software devs about simple user interfaces and self-installing programs. I'll get excited when the interface says "Drop target face here", "Drop source video here", and then a big red button that says "GO". Until then: "Gee-whiz, that's interesting."
I agree. I followed the instructions, but I hit a wall when my computer couldn't find git. Very cool program, but I will have to wait for it to be simplified.
The problem is, that usually happens only when someone monetizes it with subscription fees.
@@BT-vu2ek This would take the devs wayyy too long to do.
For starters, the people creating those demos/projects aren't UI designers, nor do they create the demos for widespread or commercial use, or even create them with the end-user in mind at all. They create them as part of their scientific papers and studies. Meaning it's a somewhat beautified version of their messy lab experiments. The fact they then release those projects as open source for everyone to use is something extra that we should be grateful about, not act entitled and complain because they didn't make a super-easy, no-code UI version for the most braindead of users.
Of course the installation complexity varies, but I wouldn't say _any_ of those AI projects' demos are especially difficult to install. The vast majority of such projects are in Python, so once you get the hang of it, it's easier. Also, having Python environment managers (Conda/Mamba, etc.) and git preinstalled is usually half of the work required. Besides that, if there's something specific you need help with, just open an issue on GitHub and ask; people will usually answer you, as long as your question isn't "please hold my hand throughout the whole install process".
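To make that concrete, the full local install shown in the video reduces to a short command sequence. A minimal sketch for Windows CMD, assuming git and Miniconda are already installed and on PATH (the video pins Python 3.9.18, per the on-screen correction; app.py is the Gradio demo script in the repo):

rem Grab the code and create an isolated environment
git clone https://github.com/KwaiVGI/LivePortrait.git
cd LivePortrait
conda create -n LivePortrait python==3.9.18
conda activate LivePortrait
rem Install dependencies, then launch the Gradio interface
pip install -r requirements.txt
python app.py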
Lol. Yeah, I love this stuff, but unfortunately, I'm like you. I sit out the first few months of new stuff now, waiting for some paid site to hopefully pick it up, and then I just give them my money. Lol.
THIS IS INSANE... All wannabe yahoo boys from Nigeria say hi. Your work is so easy now haha
Why are they using Frank Tufano’s image? 😂
Thanks man! You deserve many more subs and likes :]
Thanks!
Thanks! It's absolutely awesome as nodes in ComfyUI!
oh, is there a node for this already?
@@theAIsearch Yup ^^
The only catch is...
Proceeds to list a thing that 99.99% of us won't be able to get round
what is that thing??? o_O
@@sickvr7680 You have to have a Chinese phone number
@@sickvr7680 Pay for the subscription and graphics card. That's enough to raise 5 children in Africa.
Stop being lazy!!!
@@fzigunov Lazy? I wouldn't know how to get a Chinese phone number, would you? And that's before jumping through the myriad hoops to get it installed.
Great demo! Curious as to your CPU / GPU / ram configuration that you ran this on?
Thanks. RTX 5000 Ada, 16GB VRAM. The CPU is an Intel i7, but I don't think that matters.
I know a few people that I wish I could set their Target Lip Open Ratio to zero. Just sayin'.
Can you make a video on your hardware? That setup you have looks cool
Man this seems insane. I love your findings. Will test it tonight!
Just one thing! If you could add the timer or how long it took to process that would be very much appreciated 🙏
Thanks. For a 10s video, it took maybe 1-2min to generate. Very quick compared to other tools
@@theAIsearch thank you very much! That’s waaay faster than I expected! 🤯
99% of people liking this will never be able to get it working, even if they try. They like it immediately after watching YouTube videos, and never do anything with it.
I sadly agree with you. I think it's the way he presents the information. For example, he starts off saying "do this quick thing", and then it turns into a multi-step process that should have been explained in another video. Saying something is quick and then expanding it into something many people would feel is not quick will ultimately turn people off.
If you prep them ahead of time that it will be a daunting task, they will be mentally ready for it, or will watch the video when they have the appropriate amount of time.
I'm not even trying. The use case is still very limited; the driving video needs to be really clear with minimal shoulder movement. But getting it working is actually pretty 'simple' if you already know how to use ComfyUI and have used InsightFace before; at the bottom of the GitHub page you can find the community resources you can use to get it working with ComfyUI.
I actually got this working in ComfyUI, but my results aren't that flash, unfortunately. Might have to try this in a venv... 😕
It's too technical imo, like this would take ages to dissect
@@biggerbitcoin5126 lmao he literally gave you a step by step, most technical things don't go anywhere near as far as he went to describe how to do it. Are you able to understand basic English? Do you have comprehension skills? This isn't even a technical vs non-technical issue at this point.
Wow, great stuff. This is the next step I was waiting for. All the puzzle pieces are falling together... moving characters and then making them talk or sing. It just all needs to come together in one platform to combine different features to create movies or music videos or virtual avatars. Thanks for sharing! EDIT: Can you make this work with 16:9 ratio images? I see a lot of lip-sync programs that are just square.
You guide like a god. Thanks for the instructions.
no problem
uwu
😃
Amazing feature 🤩 Greatly appreciate you doing this super simple video guide on how to use this tool. Game changer! Thanks so muchhhh
enjoy!
Available in Pinokio?
This is awesome. AI is really advancing fast.
Too fast
The cat chopping chicken was awesome!
Everything was going well until I tried to launch the Gradio interface; I get "No module named 'torch'". Please help.
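If it helps, "No module named 'torch'" usually means the interface was launched outside the environment where the requirements were installed. A quick check, assuming the conda env from the video:

conda activate LivePortrait
rem Should print a version number instead of an import error
python -c "import torch; print(torch.__version__)"

If the import still fails inside the env, re-run pip install -r requirements.txt there.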
Wah cool!!!! Can you do Xi?
Thanks for the tutorial...I was finally able to install miniconda without any problems.
I followed the steps exactly, and was all fine until the 12:13 mark. CMD still told me 'conda' is not recognized as an internal or external command, operable program or batch file.
Opening CMD and doing conda --version showed that it was installed though.
open a new cmd and try again
It works very well when the source photo and input video are at the same angle, but there is obvious warping when the angle is different. Best to keep the angles the same.
Thanks for sharing
That's it, I'm never ever gonna believe what I see online or on digital media anymore.
3:15 The images don't follow the driving video's eyes at all. It can't reproduce crossed eyes or side-eye at all.
Bro, I think you should also make basic tutorials about Python, pip requirements installation, Anaconda, git, and the basics of using AI locally, so at least people can solve their errors while running it. On your PC everything is already installed, but here, after pip install requirements, it's saying torch isn't available.
FaceFusion is the best choice since it's a Stable Diffusion extension.
Best channel 🔥🔥🔥
Thanks!
This looks awesome. Would you say it works better installed locally? Also, if you install it locally, do you still have to pay for a membership, or is there a one-time fee to buy it forever lol
It depends on your hardware. If you have a good CUDA GPU, it runs great. If not, it's best to run it online via Hugging Face or other platforms. It's free to install locally.
@@theAIsearch thank you!
Wow, you always present great advances in AI and open source. I am very grateful for your channel; I will try this soon.
Thanks!
Great tutorial! Can the driving video be longer than a few seconds, like 5 minutes? Thank you.
I literally thought, "This would be amazing if it could only do animals too", and 30 seconds later.... he shows it doing animals too. Can't wait to try this one out.
have fun!
I tried to upload a video instead of a picture, but it doesn't allow any extension other than a photo extension. But you showed in your video that it is also possible to add a video 🤔
I hope someone else will show us how to do it. I want to know how to do that as well.
they will release the video feature 'in a few days' github.com/KwaiVGI/LivePortrait/issues/27
What is the output resolution of these videos?
Bro, a good continuation would be to explain how to use it vid-to-vid. Can you explain this? I saw examples of guys doing vid-to-vid using LivePortrait, and it's awesome.
And if I need to transfer a facial expression not to a video but to a photo, how can I do this?
Amazing demo! What are the potential security implications of using AI deepfake technology like Live Portrait? Are there measures in place to prevent misuse?
insane use cases are coming
How can I run this LivePortrait, if i'm using an AMD GPU?
Great video! Just wondering how you use a video as the source and a video as the driving input; I only see an image as the source option. If you can, let us know. Thanks!
it will be released soon: github.com/KwaiVGI/LivePortrait/issues/27
The numa numa songs are gonna upgrade with this
Does it work only with a GPU? Because torch gives me some problems with the installation.
Can this be integrated into Blender to animate 3D faces? I imagine it would be easier for it to read 3D faces than 2D, right?
Just a question: if the source video is already talking, and you then put in a driving video full of talk and expressions as well, what would be the outcome?
Will it mask the driving video's talking over the source video's talking?
And thanks for the video and information 👍👍👍
good question. they haven't released the video feature yet, but thats a good thing to test out
Thank you. Excellent video, very detailed and well explained. New subscriber!
Thanks!
TurboType is great! I was looking for something like that.
Btw, your input files are very small. How long does it take to render? Can we use a 1080p video that is 30 seconds long? The input would be around 100MB.
Dell and Nvidia huh? Were one of those your Chinese friends that gave you the code :D
Can it run in realtime with a webcam feed?
How do I create a file, like an exe, to just open it up later? I could use it the first time, but when I closed the tab and cmd I wasn't able to open it again...
You could create a .bat instead and run commands there.
Maybe auto-py-to-exe
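A minimal sketch of such a .bat launcher, assuming a default per-user Miniconda install and that the repo was cloned to C:\LivePortrait (adjust both paths to your setup; app.py is the Gradio script shown in the video):

@echo off
rem Activate the conda env, then launch the Gradio app
call %USERPROFILE%\miniconda3\Scripts\activate.bat LivePortrait
cd /d C:\LivePortrait
python app.py
pause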
Any idea why I am getting this error, everything else worked up until this point:
C:\Users\funky\Desktop\liveportrait\LivePortrait>conda create -n LivePortrait python==3.9
'conda' is not recognized as an internal or external command,
operable program or batch file.
I have the same issue
same issue here
Check the conda build for your Python version and install it. 'conda' not recognized means the system can't find the conda program it needs to run the app.
Add .18 to the end
conda create -n LivePortrait python==3.9.18
I almost missed it myself, but it's in text at the bottom of the screen, correcting the one from GitHub
You didn't set your env path correctly. You can fix that, or manually cd to conda's path before issuing the commands.
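For reference, a sketch of that PATH fix in CMD, assuming Miniconda was installed per-user to the default %USERPROFILE%\miniconda3 location (session-only, so nothing permanent breaks):

rem Make conda visible to this CMD session
set PATH=%USERPROFILE%\miniconda3;%USERPROFILE%\miniconda3\Scripts;%PATH%
rem Verify
conda --version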
How do you do the sample you showed of a video transposed onto a video source (4:17)? Thanks!
they will release it soon github.com/KwaiVGI/LivePortrait/issues/27
@@theAIsearch cool. thanks for the reply! :D
Very cool, thank you for sharing ❤
Too bad it won't work on VIDEOS or Multiple Faces like in their examples.
I can't solve the "CUDA is not compatible with Torch" error! Which version is compatible with which? I have CUDA 12.1.
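In case it helps: a sketch of installing a CUDA 12.1 build of PyTorch from the official wheel index, run inside the activated LivePortrait env (any exact versions pinned in the repo's requirements.txt take precedence):

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121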
Can you do it in real time? There is a cam icon under the video input. What happens when the face turns away from the camera?
Amazing walkthrough! Thank you! I'm having an issue where the output video bobbles around and shakes a bit. I'm not seeing this in your examples. Any ideas on why or how a shake is happening on the output?
Hello dear, thank you for sharing. I got this error and couldn't figure it out; could you help me? Thanks in advance... "The requested GPU duration (240s) is larger than the maximum allowed retry in -1 day, 23:59:59"
This tutorial was great, thanks. Question: is there any tool (like this, to run locally) where you upload an mp3 voiceover and it generates the mouth and eye movements, to later use in this process? Thanks!
yes, is this what you're looking for? ruclips.net/video/rlnjcRP4oVc/видео.html
What is the minimum graphics specification? I keep getting a delay when trying to do voice sync.